Reviews: Scalable Global Optimization via Local Bayesian Optimization

Neural Information Processing Systems

Major:
* I found this paper to be very exciting, presenting a promising methodology that addresses some of the most critical bottlenecks of Bayesian optimization, with a focus on large data sets (and therefore relevant for high-dimensional BO as well, where sample sizes typically need to grow substantially with the dimension).
* So, one is far from filling the space, right? Not using these for some good reason is one thing, but putting it the way it is put here sounds like it is not possible to go batch-sequential with EI...
* In the main contributions presented throughout Section 3, two main ideas are confounded: splitting the data so as to obtain local models AND using TS as the infill criterion. Which is (most) responsible for the improved performance over the state of the art?

Minor (selected points):
* Page 1: What does "outputscales" mean?
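The reviewer's second major point separates two ingredients: local modeling and Thompson sampling (TS) as the infill criterion. As an illustration of the latter in isolation, the toy sketch below draws one sample path from a GP posterior over a candidate grid and proposes the point where that draw is minimal. The kernel, lengthscale, and grid are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D GP posterior over a grid of candidate points (illustrative only).
def rbf_kernel(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

X_obs = np.array([-1.0, 0.2, 0.9])
y_obs = X_obs**2                      # observed values of a toy objective
X_cand = np.linspace(-2, 2, 200)

K = rbf_kernel(X_obs, X_obs) + 1e-6 * np.eye(len(X_obs))
K_s = rbf_kernel(X_cand, X_obs)
K_ss = rbf_kernel(X_cand, X_cand)

K_inv = np.linalg.inv(K)
mu = K_s @ K_inv @ y_obs              # posterior mean on the grid
cov = K_ss - K_s @ K_inv @ K_s.T      # posterior covariance on the grid

# Thompson sampling as infill: draw ONE function sample from the posterior
# and evaluate next wherever that draw is minimal.
draw = rng.multivariate_normal(mu, cov + 1e-6 * np.eye(len(X_cand)))
x_next = X_cand[np.argmin(draw)]
```

Unlike EI, which maximizes a deterministic acquisition surface, each TS draw is random, so repeated draws naturally yield diverse batch proposals.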


Reviews: Scalable Global Optimization via Local Bayesian Optimization

Neural Information Processing Systems

The reviewers liked the paper, its results and the analysis. Please address the reviewer comments in the final version, in particular the relation to related work.


Scalable Global Optimization via Local Bayesian Optimization

Eriksson, David, Pearce, Michael, Gardner, Jacob, Turner, Ryan D., Poloczek, Matthias

Neural Information Processing Systems

Bayesian optimization has recently emerged as a popular method for the sample-efficient optimization of expensive black-box functions. However, the application to high-dimensional problems with several thousand observations remains challenging, and on difficult problems Bayesian optimization is often not competitive with other paradigms. In this paper we take the view that this is due to the implicit homogeneity of the global probabilistic models and an overemphasized exploration that results from global acquisition. We propose the TuRBO algorithm that fits a collection of local models and performs a principled global allocation of samples across these models via an implicit bandit approach. A comprehensive evaluation demonstrates that TuRBO outperforms state-of-the-art methods from machine learning and operations research on problems spanning reinforcement learning, robotics, and the natural sciences.
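The abstract's core mechanism, candidates from several local regions competing through a single joint Thompson-sampling draw so that the evaluation is implicitly allocated to the most promising region, can be sketched as follows. This is an illustrative toy, not the authors' implementation: the objective `f`, the region sizes, and the noisy stand-in for a posterior draw are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # toy black-box objective (minimize); stands in for the real surrogate
    return np.sum(x**2, axis=-1)

dim, n_regions, n_cand = 2, 3, 50
# each local region: a center and an edge length of an axis-aligned box
centers = rng.uniform(-2, 2, size=(n_regions, dim))
lengths = np.full(n_regions, 0.8)

def thompson_step(centers, lengths):
    # draw candidate points inside every local region
    all_cand, region_of = [], []
    for i in range(n_regions):
        lo = centers[i] - lengths[i] / 2
        hi = centers[i] + lengths[i] / 2
        all_cand.append(rng.uniform(lo, hi, size=(n_cand, dim)))
        region_of.append(np.full(n_cand, i))
    X = np.vstack(all_cand)
    # stand-in for one Thompson draw from each local surrogate's posterior:
    # here, true value plus noise plays that role
    draw = f(X) + 0.1 * rng.standard_normal(len(X))
    best = int(np.argmin(draw))
    return X[best], int(np.concatenate(region_of)[best])

x_next, region = thompson_step(centers, lengths)
```

Taking the global argmin over one joint draw is what makes the allocation an implicit bandit: regions whose surrogates predict better values win the evaluation more often, without an explicit bandit index being computed.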